[BM] Senior Data Engineer

Pleased to meet you. We are Zallpy.

We are much more than a technology company; we are a diverse, plural, and talented community. Our purpose is to lead digital transformation with excellence and agility, promoting mutual and genuine growth through ethical, long-lasting relationships. Flexibility is one of our trademarks: we operate across different team models and formats while maintaining our light, collaborative, and integrated culture, and we provide equitable opportunities in a space where everyone feels safe and heard.


What you will do:

  • Apply advanced SQL skills to manipulate and transform data;
  • Work with relational and key-value databases to ensure data quality and integrity;
  • Leverage strong knowledge of dimensional modeling to create efficient data structures;
  • Engineer features for data analysis and modeling;
  • Build efficient data pipelines for data collection, processing, and storage;
  • Utilize APIs to extract data from external sources;
  • Work with tools like Pentaho, StreamSets, or similar for data processing;
  • Use Python and/or Java to implement data engineering solutions;
  • Use Git for version control;
  • Understand Data Platform, Data Warehouse, and Data Lake architecture;
  • Handle SQL and NoSQL databases;
  • Work with cloud services, especially Google Cloud Platform (BigQuery, Cloud Storage, Cloud Functions);
  • Apply knowledge of DataOps tools to enhance ETL processes and Self-Service Analytics;
  • Ensure compliance with the General Data Protection Regulation (GDPR);
  • Demonstrate strong communication, problem-solving, and teamwork skills.

What we are looking for:

  • A bachelor’s or postgraduate degree in Computer Science, Computer Engineering, Statistics, Mathematics, or a related field;
  • Excellent oral and written communication skills in English;
  • Expertise in SQL for data manipulation and transformation;
  • Strong understanding of dimensional modeling;
  • Experience in feature engineering and managing relational and key-value databases;
  • Proven experience in building data pipelines and handling APIs for data extraction;
  • Practical knowledge of ETL/ELT tools like Pentaho, StreamSets, or similar;
  • Proficiency in Python and/or Java for developing data engineering solutions;
  • Familiarity with version control tools, especially Git;
  • Understanding of Data Platform, Data Warehouse, and Data Lake architecture;
  • Experience working with both SQL and NoSQL databases;
  • Experience with Google Cloud;
  • Strong knowledge of DataOps tools to optimize ETL processes and Self-Service Analytics;
  • Familiarity with the General Data Protection Regulation (GDPR);
  • Excellent communication, problem-solving, and teamwork skills.

Where you will work:

  • This is a 100% remote position.

Employment type:

  • CLT (standard employment contract);
  • Cooperado (cooperative member);
  • PJ (independent contractor).

Our benefits include:

  • 100% remote work;
  • Meal and/or food allowance in a flexible model (EVA card)*
  • Unimed health insurance for employees and dependents*
  • Uniodonto dental insurance for employees and dependents*
  • Agreements with educational institutions for discounts on undergraduate, postgraduate, and short courses;
  • Totalpass to take care of your physical health;
  • Zenklub to take care of your mental health;
  • Life insurance*
  • Daycare assistance for zallpers who earn up to three times the category minimum wage and have children aged 4 months to 6 years*
  • Baby Zallpy: a gift to celebrate the birth of zallpers’ babies;
  • Communities: we support three voluntary zallper communities: Diversity, Equity & Inclusion; Sports & Movement; and Technology.

* Benefits marked with an asterisk apply to the CLT employment type.